Data Science vs Software Engineering - Tech Salary Career Analysis Part 1
A comparison of DS and SWE salary data for people trying to choose their career path.
This post is a Jupyter Notebook! Check out my GitHub to play with the source code.
A common question I see online in the various places data scientists and software engineers meet to discuss how nerdy we are is "who makes more money?" I've seen a lot of claims thrown around without much evidence to substantiate them, which is odd considering data scientists are part of the conversation. So I decided to do a little digging to see if I could put the arguments to rest once and for all.
Levels.fyi is a well-known aggregator of salary data for the tech sector. Granted, using data from levels.fyi might not be the best sampling methodology, since it only contains data from people who self-report their salaries, but it's what was easily available. At any rate, it should suffice for a blog post.
I built a VERY quick and dirty web scraper using Selenium and BeautifulSoup to collect the relevant salary data from their website. The code for the scraper can be viewed here. Note that for the purposes of this analysis the regions have been limited to the US.
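The parsing half of a scraper like this boils down to pulling rows out of an HTML table with BeautifulSoup. As a rough sketch (the table markup below is invented purely for illustration; the real levels.fyi page is rendered with JavaScript, which is why Selenium is needed to get the HTML in the first place):

```python
from bs4 import BeautifulSoup

# Hypothetical HTML standing in for a rendered salary table.
html = """
<table id="salary-table">
  <tr><th>Company</th><th>Total Compensation</th></tr>
  <tr><td>ExampleCorp</td><td>185000</td></tr>
  <tr><td>SampleSoft</td><td>210000</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = []
# Skip the header row, then pull the text out of each data cell.
for tr in soup.find("table", id="salary-table").find_all("tr")[1:]:
    company, comp = [td.get_text(strip=True) for td in tr.find_all("td")]
    rows.append({"Company": company, "Total Compensation": int(comp)})
```

Each `rows` entry then becomes one record in the CSV files used below.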
During the scraping process I was having some issues with a spotty internet connection, so while scraping I frequently saved the data to disk in case of any errors. I then aggregated the data by career track into the files that can be seen in this folder, using the script found here. All told, I ended up scraping data for the following career tracks:
- Data Scientist
- Product Manager
- Recruiter
- Sales
- Software Engineer
- Software Engineering Manager
- Technical Program Manager
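That save-as-you-go pattern is simple enough to sketch. Roughly (the `scrape_page` helper and file locations here are hypothetical stand-ins for the real scraping code):

```python
import pandas as pd
from pathlib import Path

CHECKPOINT_DIR = Path("checkpoints")  # hypothetical location
CHECKPOINT_DIR.mkdir(exist_ok=True)

def scrape_page(page):
    # Stand-in for the real Selenium/BeautifulSoup scraping logic.
    return pd.DataFrame({"Total Compensation": [150000 + 1000 * page]})

# Save each page to disk as soon as it's scraped, so a dropped
# connection only loses the page currently in flight.
for page in range(3):
    scrape_page(page).to_csv(CHECKPOINT_DIR / f"page_{page}.csv", index=False)

# Afterwards, aggregate the checkpoints into a single dataframe.
complete_df = pd.concat(
    pd.read_csv(f) for f in sorted(CHECKPOINT_DIR.glob("page_*.csv"))
)
```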
With the data collected, we can get to work analysing it! The first thing we have to do is import the relevant libraries, and explore the data for any outliers. My preferred tool for this is called pandas-profiling, and its use is showcased below.
import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
from pandas_profiling import ProfileReport
DS_FILE_LOCATION = "https://raw.githubusercontent.com/borsboomT/levels_scraper/main/data/DataScientist_completeCSV.csv"
PM_FILE_LOCATION = "https://raw.githubusercontent.com/borsboomT/levels_scraper/main/data/ProductManager_completeCSV.csv"
REC_FILE_LOCATION = "https://raw.githubusercontent.com/borsboomT/levels_scraper/main/data/Recruiter_completeCSV.csv"
SAL_FILE_LOCATION = "https://raw.githubusercontent.com/borsboomT/levels_scraper/main/data/Sales_completeCSV.csv"
SWE_FILE_LOCATION = "https://raw.githubusercontent.com/borsboomT/levels_scraper/main/data/SoftwareEngineer_completeCSV.csv"
SWEM_FILE_LOCATION = "https://raw.githubusercontent.com/borsboomT/levels_scraper/main/data/SoftwareEngineeringManager_completeCSV.csv"
TPM_FILE_LOCATION = "https://raw.githubusercontent.com/borsboomT/levels_scraper/main/data/TechnicalProgramManager_completeCSV.csv"
raw_DS_df = pd.read_csv(DS_FILE_LOCATION)
# This ensures that the numerical columns are interpreted as numeric columns by
# converting all N/A values to NaN.
numeric_cols = [
"Total Compensation",
"Stock Comp",
"Base Comp",
"Bonus Comp",
"YOE Total",
"YOE At Company",
]
raw_DS_df[numeric_cols] = raw_DS_df[numeric_cols].apply(
pd.to_numeric, errors="coerce"
)
# We also round and convert the experience columns to integer values
# for easier grouping later on. Rows missing experience data are dropped
# first, since astype(int) can't handle NaN.
exp_cols = ["YOE Total", "YOE At Company"]
raw_DS_df.dropna(subset=exp_cols, inplace=True)
raw_DS_df[exp_cols] = raw_DS_df[exp_cols].round().astype(int)
profile = ProfileReport(raw_DS_df, title="Pandas Profiling Report")
profile.to_notebook_iframe()
This report gives us a good overview of the way our data looks. We can see the number of distinct values for categorical features, as well as the distribution of values for numerical features. By clicking the "Toggle Details" button for the numerical features we can also see some general statistics about them.
Data Cleaning and Outlier Detection
This data clearly has some fairly substantial outliers. These data points may well be real, after all, tech salaries are known for being quite high and somewhat erratic, but we're more interested in the data as it pertains to the average person. For this reason, we're going to remove the outliers using the interquartile range (IQR) method. We need to be careful, though: we expect different salaries depending on the number of years of experience someone has, so we need to group the salaries by experience level before pruning any data.
YOE_vals = raw_DS_df["YOE Total"].unique()
no_outlier_df_list = []
for YOE in YOE_vals:
    # We filter the dataframe based on the total years of experience.
    df = raw_DS_df[raw_DS_df["YOE Total"] == YOE]
    # We get the upper and lower quartile data for the numeric columns.
    lower_quant = df[numeric_cols].quantile(0.25)
    upper_quant = df[numeric_cols].quantile(0.75)
    IQR = upper_quant - lower_quant
    cut_off = IQR * 1.5
    lower_cut_off, upper_cut_off = lower_quant - cut_off, upper_quant + cut_off
    # We drop any row with at least one numeric value outside the cut-offs.
    # NaN values compare as False, so they are never flagged as outliers
    # here; they get imputed in the next step instead.
    outlier_filter = (
        (df[numeric_cols] < lower_cut_off)
        | (df[numeric_cols] > upper_cut_off)
    ).any(axis=1)
    df = df[~outlier_filter]
    # We save the filtered dataframe by adding it to a list
    no_outlier_df_list.append(df)
# Once all the outliers have been filtered, we concatenate the filtered dataframes
# back into a single dataframe.
no_outliers_DS_df = pd.concat(no_outlier_df_list)
no_outliers_DS_df.reset_index(inplace=True, drop=True)
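As an aside, the same per-group IQR rule can be written without the explicit loop by using `groupby().transform()`. A minimal sketch on toy, single-column data:

```python
import pandas as pd

toy = pd.DataFrame({
    "YOE Total": [1, 1, 1, 1, 5, 5, 5, 5],
    "Total Compensation": [100, 110, 105, 900, 200, 210, 205, 9000],
})

def within_iqr(s):
    # Standard 1.5 * IQR rule, computed within a single experience group.
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    cut = 1.5 * (q3 - q1)
    return (s > q1 - cut) & (s < q3 + cut)

# transform() applies the bound check group by group, returning a mask
# aligned with the original rows.
mask = (
    toy.groupby("YOE Total")["Total Compensation"]
    .transform(within_iqr)
    .astype(bool)
)
filtered = toy[mask]
```

Here the 900 and 9000 rows get dropped, while the rest of each group survives.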
We also have a lot of missing data points. Since we'll only really be using aggregates to look at this data, we can handle the missing values with imputation; in other words, we'll replace them with the median value for each feature. Again, we have to be careful to first group the data by total years of experience. Note: here we're looping over the years of experience a second time. To be more efficient, we would typically merge this task with the previous one so that the data only has to be traversed once.
imputed_df_list = []
for YOE in YOE_vals:
    # We filter the dataframe based on the total years of experience.
    df = no_outliers_DS_df[no_outliers_DS_df["YOE Total"] == YOE]
    # We fill the NaN cells with the group's median values. numeric_only
    # keeps the median from being attempted on the categorical columns.
    df = df.fillna(df.median(numeric_only=True))
    # We save the imputed dataframe by adding it to a list
    imputed_df_list.append(df)
# Once every group has been imputed, we concatenate the dataframes
# back into a single dataframe.
imputed_DS_df = pd.concat(imputed_df_list)
imputed_DS_df.reset_index(inplace=True, drop=True)
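Incidentally, this grouped median imputation can also be done in one pass with `groupby().transform("median")`. A toy sketch:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    "YOE Total": [1, 1, 1, 5, 5, 5],
    "Bonus Comp": [10.0, np.nan, 20.0, 40.0, 50.0, np.nan],
})

# Each NaN is replaced with the median of its own experience group.
group_medians = toy.groupby("YOE Total")["Bonus Comp"].transform("median")
toy["Bonus Comp"] = toy["Bonus Comp"].fillna(group_medians)
```

The YOE=1 gap is filled with 15.0 and the YOE=5 gap with 45.0, each group's own median.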
Let's check out the data to make sure everything looks alright.
profile = ProfileReport(imputed_DS_df, title="Pandas Profiling Report")
profile.to_notebook_iframe()
Much better! Since we have to do this for each individual career track dataset, we'll combine the process into a single callable function.
def clean_salary_data(raw_df):
    # This ensures that the numerical columns are interpreted as numeric
    # columns by converting all N/A values to NaN.
    numeric_cols = [
        "Total Compensation",
        "Stock Comp",
        "Base Comp",
        "Bonus Comp",
        "YOE Total",
        "YOE At Company",
    ]
    raw_df[numeric_cols] = raw_df[numeric_cols].apply(
        pd.to_numeric, errors="coerce"
    )
    # We also round and convert the experience columns to integer values for
    # easier grouping later on. astype(int) can't handle NaN or infinities,
    # so rows missing experience data are dropped first.
    exp_cols = ["YOE Total", "YOE At Company"]
    raw_df.replace([np.inf, -np.inf], np.nan, inplace=True)
    raw_df.dropna(subset=exp_cols, inplace=True)
    raw_df[exp_cols] = raw_df[exp_cols].round().astype(int)
    YOE_vals = raw_df["YOE Total"].unique()
    cleaned_df_list = []
    for YOE in YOE_vals:
        # We filter the dataframe based on the total years of experience.
        df = raw_df[raw_df["YOE Total"] == YOE]
        # We get the upper and lower quartile data for the numeric columns.
        lower_quant = df[numeric_cols].quantile(0.25)
        upper_quant = df[numeric_cols].quantile(0.75)
        IQR = upper_quant - lower_quant
        cut_off = IQR * 1.5
        lower_cut_off, upper_cut_off = (
            lower_quant - cut_off,
            upper_quant + cut_off,
        )
        # We drop any row with at least one numeric value outside the
        # cut-offs; NaN values are never flagged, so they survive to be
        # imputed below.
        outlier_filter = (
            (df[numeric_cols] < lower_cut_off)
            | (df[numeric_cols] > upper_cut_off)
        ).any(axis=1)
        df = df[~outlier_filter]
        # We fill the NaN cells with the group's median values
        df = df.fillna(df.median(numeric_only=True))
        # We save the cleaned dataframe by adding it to a list
        cleaned_df_list.append(df)
    cleaned_df = pd.concat(cleaned_df_list)
    cleaned_df.reset_index(inplace=True, drop=True)
    return cleaned_df
Alright, let's get the Data Scientist and Software Engineer data into the notebook and clean it using our new function.
cleaned_DS_df = clean_salary_data(pd.read_csv(DS_FILE_LOCATION))
cleaned_SWE_df = clean_salary_data(pd.read_csv(SWE_FILE_LOCATION))
Now that we have the relevant data, and it's reasonably clean, let's take a look at total yearly compensation as a function of the number of years of experience for both DS and SWE employees. Unfortunately, there isn't much data for DS beyond the 10 year mark, so we'll cap the comparison there.
# We aggregate only the compensation columns so the categorical columns
# don't trip up the numeric aggregations.
agg_cols = ["Total Compensation", "Stock Comp", "Base Comp", "Bonus Comp"]
tot_years_DS_df = cleaned_DS_df.groupby("YOE Total")[agg_cols].agg(
    ["mean", "count", "std", "median"]
)
tot_years_SWE_df = cleaned_SWE_df.groupby("YOE Total")[agg_cols].agg(
    ["mean", "count", "std", "median"]
)
fig = go.Figure()
fig.add_trace(
go.Scatter(
name="DS_med",
x=tot_years_DS_df.index,
y=tot_years_DS_df["Total Compensation"]["median"],
mode="markers",
)
)
fig.add_trace(
go.Scatter(
name="SWE_med",
x=tot_years_SWE_df.index,
y=tot_years_SWE_df["Total Compensation"]["median"],
mode="markers",
)
)
fig.update_layout(
xaxis_title="Total Years of Experience",
yaxis_title="Total Yearly Compensation (USD)",
xaxis_range=[-0.5, 10.5],
yaxis_range=[150000, 350000],
)
fig.update_layout(showlegend=True)
fig.show()
Based on this comparison, it appears that SWEs do make slightly more than DSs; the median values are consistently somewhat higher. Let's quantify that difference using a 95% confidence interval.
salary_diff_DS_SWE = (
tot_years_SWE_df["Total Compensation"]["median"].iloc[:11]
- tot_years_DS_df["Total Compensation"]["median"].iloc[:11]
)
count_diff = salary_diff_DS_SWE.count()
mean_diff, std_diff = salary_diff_DS_SWE.mean(), salary_diff_DS_SWE.std()
conf_interval = 1.96 * std_diff / np.sqrt(count_diff)
print(
"Estimated Income Difference Between DS and SWE = {} +/- {}".format(
round(mean_diff, 2), round(conf_interval, 2)
)
)
This illustrates that while it does appear that SWEs make more money than DSs given the same number of years of experience, a difference of $0 lies within the 95% confidence interval, meaning we cannot say so for certain. Ultimately, the median income of the two career paths is quite similar, and whichever path you've chosen, you're going to end up doing just fine. At the end of the day you need to find something you enjoy doing, and worry less about what appears to be a marginal difference in salary at best.
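One caveat on the interval above: with only 11 paired median differences, a t critical value (about 2.23 at 10 degrees of freedom) is strictly more appropriate than the normal 1.96, which makes the interval slightly wider and the conclusion slightly more conservative. A minimal paired-test sketch, using made-up differences since the point here is the mechanics:

```python
import numpy as np

# Hypothetical per-YOE median differences (SWE minus DS), in USD.
diffs = np.array(
    [5000, -8000, 12000, -3000, 7000, 1000, -9000, 4000, 6000, -2000, 10000]
)

n = diffs.size
# Paired t statistic: mean difference over its standard error.
t_stat = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(n))
t_crit = 2.228  # two-sided 95% critical value for df = n - 1 = 10

significant = t_stat > t_crit
```

For these made-up numbers the statistic falls short of the critical value, matching the real data's story: a positive but not statistically significant gap.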
Up Next
Ever consider the difference between job hopping and employer tenure? A career as a recruiter? A big move to the Bay Area? Now that we have a nice function for cleaning this data up, we'll be doing a more thorough analysis of the factors that impact salary. Maybe we'll be able to answer a few of your burning questions.